7 research outputs found

    A High-Fidelity Open Embodied Avatar with Lip Syncing and Expression Capabilities

    Embodied avatars as virtual agents have many applications and offer benefits over disembodied agents, allowing non-verbal social and interactional cues to be leveraged, much as in human-to-human interaction. We present an open embodied avatar built upon the Unreal Engine that can be controlled via a simple Python programming interface. The avatar has lip syncing (phoneme control), head gesture, and facial expression (using either facial action units or cardinal emotion categories) capabilities. We release code and models to illustrate how the avatar can be controlled like a puppet or used to create a simple conversational agent using public application programming interfaces (APIs). GitHub link: https://github.com/danmcduff/AvatarSim
    Comment: International Conference on Multimodal Interaction (ICMI 2019)
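
    As a rough illustration of what puppet-style control through such a Python interface could look like, the sketch below sends JSON control messages for phonemes, facial action units, and emotion categories to a locally running avatar process. The class name AvatarClient, the message format, and the port are assumptions made for this example and are not the actual AvatarSim API.

    # Hypothetical sketch: driving an Unreal Engine avatar from Python.
    # The names below (AvatarClient, "phoneme", "action_unit", "emotion")
    # are illustrative assumptions, not the actual AvatarSim interface.
    import json
    import socket


    class AvatarClient:
        """Sends JSON control messages to a locally running avatar process."""

        def __init__(self, host: str = "127.0.0.1", port: int = 5000) -> None:
            self.sock = socket.create_connection((host, port))

        def _send(self, message: dict) -> None:
            self.sock.sendall((json.dumps(message) + "\n").encode("utf-8"))

        def speak_phoneme(self, phoneme: str, duration: float) -> None:
            # Lip syncing is exposed as per-phoneme mouth shapes (visemes).
            self._send({"type": "phoneme", "value": phoneme, "duration": duration})

        def set_action_unit(self, au: int, intensity: float) -> None:
            # Facial expressions can be driven by FACS action units...
            self._send({"type": "action_unit", "au": au, "intensity": intensity})

        def set_emotion(self, emotion: str, intensity: float) -> None:
            # ...or by cardinal emotion categories such as "joy" or "surprise".
            self._send({"type": "emotion", "value": emotion, "intensity": intensity})


    if __name__ == "__main__":
        avatar = AvatarClient()
        avatar.set_emotion("joy", 0.8)          # smile
        for viseme, dur in [("HH", 0.08), ("AH", 0.12), ("L", 0.10), ("OW", 0.15)]:
            avatar.speak_phoneme(viseme, dur)   # mouth the word "hello"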

    Learning-Based Techniques for Facial Animation

    Thesis (Ph.D.)--University of Washington, 2019
    For decades, animation has been a popular storytelling technique. Traditional tools for creating animations are labor-intensive, requiring animators to painstakingly draw frames and motion curves by hand. An alternative workflow is to equip animators with direct real-time control over digital characters via performance, which offers a more immediate and efficient way to create animation. Even with existing expression transfer and lip sync methods, producing convincing facial animation in real time is a challenging task. In this work, we present several deep learning techniques to model and automate the process of perceptually valid expression retargeting from humans to characters, real-time lip sync for animation, and building an emotionally aware embodied conversational agent. We also present findings from user studies and some promising future directions in this domain.
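
    One component mentioned above, real-time lip sync, can be pictured as a model that maps a short window of audio features to viseme probabilities at each animation frame. The sketch below is a minimal stand-in under that assumption and is not the architecture developed in the thesis; the feature dimension, window length, and viseme inventory size are illustrative choices.

    # Hypothetical sketch of a real-time lip-sync model: map a short window of
    # audio features (e.g. MFCC frames) to viseme probabilities. This is an
    # illustrative stand-in, not the model described in the thesis.
    import torch
    import torch.nn as nn

    NUM_VISEMES = 12        # assumed size of the viseme inventory
    FEATURE_DIM = 13        # e.g. 13 MFCC coefficients per audio frame
    WINDOW = 24             # frames of audio context fed to the model


    class LipSyncNet(nn.Module):
        def __init__(self) -> None:
            super().__init__()
            # A small 1-D convolutional encoder keeps latency low enough for
            # real-time use; each prediction depends only on a short context.
            self.net = nn.Sequential(
                nn.Conv1d(FEATURE_DIM, 64, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.Conv1d(64, 64, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),
                nn.Flatten(),
                nn.Linear(64, NUM_VISEMES),
            )

        def forward(self, audio_features: torch.Tensor) -> torch.Tensor:
            # audio_features: (batch, FEATURE_DIM, WINDOW) -> (batch, NUM_VISEMES)
            return self.net(audio_features)


    if __name__ == "__main__":
        model = LipSyncNet()
        window = torch.randn(1, FEATURE_DIM, WINDOW)   # one window of features
        viseme_logits = model(window)
        print(viseme_logits.softmax(dim=-1))           # probabilities over visemes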